
    Soft-TTL: Time-Varying Fractional Caching

    Standard Time-to-Live (TTL) cache management prescribes the storage of entire files, or possibly fractions thereof, for a given amount of time after a request. As a generalization of this approach, this work proposes the storage of a time-varying, diminishing fraction of a requested file: the cache progressively evicts parts of the file over an interval of time following a request. The strategy, referred to as soft-TTL, is justified by the fact that traffic traces are often characterized by arrival processes with a decreasing, but non-negligible, probability of observing a request as the time elapsed since the last request increases. An optimization-based analysis of soft-TTL is presented, demonstrating the important role played by the hazard function of the inter-arrival request process, which measures the likelihood of observing a request as a function of the time since the most recent request.
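
    A minimal sketch of the idea, assuming a heavy-tailed (hence decreasing-hazard) inter-request process; the estimator and the hazard-to-fraction rule below (`empirical_hazard`, `soft_ttl_fraction`) are our own illustrative choices, not the paper's optimized policy:

```python
import numpy as np

def empirical_hazard(inter_arrivals, t_grid):
    """Crude estimate of the hazard h(t) = f(t) / (1 - F(t)) of the
    inter-request time: the fraction of samples that 'fire' in [t, t+dt)
    among those still pending at time t."""
    x = np.asarray(inter_arrivals)
    dt = t_grid[1] - t_grid[0]
    hazards = []
    for t in t_grid:
        pending = np.sum(x > t)                   # no new request yet by time t
        events = np.sum((x > t) & (x <= t + dt))  # request arrives in [t, t+dt)
        hazards.append(events / (pending * dt) if pending else 0.0)
    return np.array(hazards)

def soft_ttl_fraction(hazard, hazard_max):
    """Illustrative soft-TTL rule: retain a fraction of the file that is
    proportional to the current hazard, so the cache evicts parts of the
    file progressively as another request becomes less likely."""
    return np.clip(hazard / hazard_max, 0.0, 1.0)

rng = np.random.default_rng(0)
samples = rng.pareto(2.0, 200_000) + 1.0          # heavy-tailed inter-arrivals
t_grid = np.linspace(0.0, 10.0, 50)
h = empirical_hazard(samples, t_grid)
fraction = soft_ttl_fraction(h, h.max())          # diminishes with elapsed time
```

    With Pareto inter-arrivals the estimated hazard decays in the time since the last request, so the retained fraction shrinks accordingly, which is the qualitative behavior soft-TTL exploits.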

    On Optimal Geographical Caching in Heterogeneous Cellular Networks

    In this work we investigate optimal geographical caching in heterogeneous cellular networks, where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit probability: the probability that a user at an arbitrary location in the plane finds the content it requires in one of the BSs covering it. We consider the problem of optimally placing content in all BSs jointly. As this problem is not convex, we provide a heuristic scheme that finds the optimal placement policy for one type of base station conditioned on the placement in all other types. We demonstrate that these individual optimization problems are convex and provide an analytical solution. As an illustration, we find the optimal placement policy of the small base stations (SBSs) as a function of the placement policy of the macro base stations (MBSs), and show how the hit probability evolves as the deployment density of the SBSs varies. We show that the heuristic of placing the most popular content in the MBSs is almost optimal once the SBSs are deployed with optimal placement policies. For the SBSs, by contrast, no such heuristic suffices: the optimal placement is significantly better than storing the most popular content. Finally, we show that solving the individual problems iteratively, i.e., repeatedly updating the placement policies of the different BS types, does not improve the performance.

    Comment: 6 pages, 7 figures; accepted for presentation at the IEEE Wireless Communications and Networking Conference (WCNC) 2017, 19-22 March 2017, San Francisco, CA, US
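
    The structure of the conditional problem can be made concrete with a small sketch. Assuming independent Poisson coverage, so that a file cached with probability q by a BS type of coverage density m is missed by that type with probability exp(-m q), the per-type problem is convex and its KKT conditions yield a water-filling solution. Function names and the bisection scheme below are our own, not the paper's notation:

```python
import numpy as np

def hit_probability(popularity, densities, placements):
    """Total hit probability under independent Poisson coverage:
    P(hit) = sum_f p_f * (1 - exp(-sum_j m_j * q_{j,f})),
    where placements[j] is the caching-probability vector of BS type j."""
    exponent = sum(m * q for m, q in zip(densities, placements))
    return float(np.sum(popularity * (1.0 - np.exp(-exponent))))

def optimal_placement_one_type(popularity, m, capacity, other_miss):
    """Optimal placement for one BS type, conditioned on the others.
    Maximize sum_f p_f * other_miss_f * (1 - exp(-m * q_f)) subject to
    sum_f q_f = capacity and 0 <= q_f <= 1. The problem is convex; the
    KKT conditions give q_f = clip(log(p_f*other_miss_f*m / lam)/m, 0, 1),
    with the multiplier lam found by bisection on the capacity budget."""
    w = popularity * other_miss * m          # per-file marginal value at q_f = 0
    def q_of(lam):
        with np.errstate(divide="ignore"):
            q = np.log(w / lam) / m
        return np.clip(q, 0.0, 1.0)
    lo, hi = 1e-12, float(w.max())
    for _ in range(100):                     # bisection on the multiplier
        lam = 0.5 * (lo + hi)
        if q_of(lam).sum() > capacity:
            lo = lam                         # storing too much: raise lam
        else:
            hi = lam
    return q_of(hi)                          # feasible side of the budget
```

    For instance, with a Zipf popularity vector `p`, a call like `optimal_placement_one_type(p, m_sbs, C_sbs, np.exp(-m_mbs * q_mbs))` (all names hypothetical) conditions the SBS placement on a fixed MBS placement, mirroring the illustration in the abstract.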

    A Low-Complexity Approach to Distributed Cooperative Caching with Geographic Constraints

    We consider caching in cellular networks in which each base station is equipped with a cache that can store a limited number of files. The popularity of the files is known, and the goal is to place files in the caches such that the probability that a user at an arbitrary location in the plane finds the file she requires in one of the covering caches is maximized. We develop distributed asynchronous algorithms for deciding which contents to store in which cache. Such cooperative algorithms require communication only between caches with overlapping coverage areas and can operate in an asynchronous manner. The development of the algorithms is principally based on the observation that the problem can be viewed as a potential game. Our basic algorithm is derived from the best response dynamics. We demonstrate that the complexity of each best response step is independent of the number of files, linear in the cache capacity, and linear in the maximum number of base stations that cover a certain area. We then show that the overall algorithm complexity for a discrete cache placement is polynomial in both network size and catalog size. In practical examples, the algorithm converges in just a few iterations, and in most cases of interest the basic algorithm finds the best Nash equilibrium, corresponding to the global optimum. We provide two extensions of our basic algorithm, based on stochastic and deterministic simulated annealing, which find the global optimum. Finally, we evaluate the hit probability evolution on real and synthetic networks numerically and show that our distributed caching algorithm performs significantly better than storing the most popular content, the probabilistic content placement policy, and Multi-LRU caching policies.

    Comment: 24 pages, 9 figures; presented at SIGMETRICS '17
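
    A sketch of one best-response step in this spirit, using a coverage-cell representation of our own devising (a list of (area, covering_cache_ids) regions). Note that the paper's step avoids the per-file loop entirely (its complexity is independent of the catalog size); the plain version below trades that refinement for readability:

```python
import numpy as np

def best_response(cache, cells, popularity, placement, capacity):
    """One best-response step for `cache`: given the other caches'
    contents, the marginal value of storing file f is the popularity-
    weighted area where this cache is the only covering cache that
    would hold f; keep the `capacity` files with the largest value."""
    gain = np.zeros(len(popularity))
    for area, covering in cells:
        if cache not in covering:
            continue
        others = [c for c in covering if c != cache]
        for f in range(len(popularity)):
            if not any(f in placement[c] for c in others):
                gain[f] += area * popularity[f]   # f hits here only via us
    placement[cache] = set(np.argsort(gain)[::-1][:capacity].tolist())
    return placement[cache]

def best_response_dynamics(n_caches, cells, popularity, capacity, max_rounds=50):
    """Asynchronous best-response dynamics; per the abstract the game
    admits a potential function, so these updates cannot cycle."""
    placement = {c: set() for c in range(n_caches)}
    for _ in range(max_rounds):
        changed = False
        for c in range(n_caches):
            old = set(placement[c])
            if best_response(c, cells, popularity, placement, capacity) != old:
                changed = True
        if not changed:
            break
    return placement
```

    Each update requires only the contents of caches whose coverage overlaps the updating cache, which is what makes the algorithm distributed and communication-light.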

    Network Coding: Exploiting Broadcast and Superposition in Wireless Networks

    In this thesis we investigate improvements in the efficiency of wireless communication networks, based on methods that are fundamentally different from the principles underlying state-of-the-art technology. The first difference is that broadcast and superposition are exploited, instead of reducing the wireless medium to a network of point-to-point links. The second difference is that the problem of transporting information through the network is not treated as a flow problem; instead, we allow network coding to be used.

    First, we consider multicast network coding in settings where the multicast configuration changes over time. We show that for certain problem classes a universal network code can be constructed. One application is to efficiently trade off throughput against cost.

    Next, we deal with increasing energy efficiency by means of network coding in the presence of broadcast. It is demonstrated that for multiple unicast traffic in networks with nodes arranged on two- and three-dimensional rectangular lattices, network coding can reduce energy consumption by factors of four and six, respectively, compared to routing.

    Finally, we consider the use of superposition by allowing nodes to decode sums of messages. We introduce different deterministic models of wireless networks, representing various ways of handling broadcast and superposition, and provide lower and upper bounds on the transport capacity under these models. For networks with nodes arranged on a hexagonal lattice, we find that the transport capacity under a model exploiting both broadcast and superposition is at least 2.5 times, and no more than six times, the transport capacity under a model of point-to-point links.
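
    The broadcast gain underlying the energy-efficiency results can be seen in the textbook two-way relay example (not the thesis's lattice construction): two nodes exchanging packets through a relay need four point-to-point transmissions, but only three when the relay broadcasts the XOR of the two packets:

```python
def relay_exchange_with_coding(a: bytes, b: bytes) -> tuple[bytes, bytes]:
    """Two-way relay: nodes A and B exchange equal-length packets via
    relay R. Plain routing takes 4 transmissions (A->R, R->B, B->R,
    R->A); with broadcast the relay sends the single coded packet
    a XOR b once, and each node recovers the other's packet by
    XOR-ing with its own copy -- 3 transmissions in total."""
    coded = bytes(x ^ y for x, y in zip(a, b))               # relay broadcast
    recovered_at_a = bytes(x ^ y for x, y in zip(coded, a))  # equals b
    recovered_at_b = bytes(x ^ y for x, y in zip(coded, b))  # equals a
    return recovered_at_a, recovered_at_b

assert relay_exchange_with_coding(b"ping", b"pong") == (b"pong", b"ping")
```

    The lattice results in the thesis generalize this kind of saving: coding over broadcast transmissions removes redundant point-to-point hops, cutting energy per delivered bit.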

    Sign-Compute-Resolve for Tree Splitting Random Access

    We present a framework for random access that is based on three elements: physical-layer network coding (PLNC), signature codes, and tree splitting. In the presence of a collision, physical-layer network coding enables the receiver to decode, i.e., compute, the sum of the packets that were transmitted by the individual users. For each user, the packet consists of the user's signature as well as the data that the user wants to communicate. As long as no more than K users collide, their identities can be recovered from the sum of their signatures. This framework for creating and transmitting packets can be used as a fundamental building block in random access algorithms, since it helps to deal efficiently with the uncertainty about the set of contending terminals. In this paper we show how to apply the framework in conjunction with a tree-splitting algorithm, which is required to handle the case that more than K users collide. We demonstrate that our approach achieves a throughput that tends to 1 rapidly as K increases. We also present results on the net data-rate of the system, showing the impact of the overheads of the constituent elements of the proposed protocol. We compare the performance of our scheme with an upper bound obtained under the assumption that the active users are known a priori, and with an upper bound on the net data-rate of any PLNC-based strategy in which one linear equation per slot is decoded. We show that already at modest packet lengths the net data-rate of our scheme comes close to the second upper bound, i.e., the overhead of the contention resolution algorithm and the signature codes vanishes.

    Comment: This is an extended version of arXiv:1409.6902. Accepted for publication in the IEEE Transactions on Information Theory
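
    How the identities fall out of the signature sum can be illustrated with a toy construction; the power-sum signatures and brute-force resolver below are our own stand-ins for the paper's signature codes, and the data part of each packet is omitted:

```python
from itertools import combinations

def make_signatures(n_users, K):
    """Toy signature code over the integers: user i gets the tuple
    (i^1, ..., i^K) with i >= 1. The component-wise sum of at most K
    distinct signatures is unique -- it gives the power sums of the
    colliding IDs, which (via Newton's identities) determine the set."""
    return {i: tuple(i ** k for k in range(1, K + 1))
            for i in range(1, n_users + 1)}

def resolve_collision(signature_sum, signatures, K):
    """Recover the set of colliding users from the sum of their
    signatures, brute-forcing candidate sets (fine for a small demo)."""
    users = list(signatures)
    for size in range(0, K + 1):
        for cand in combinations(users, size):
            s = tuple(sum(signatures[u][k] for u in cand) for k in range(K))
            if s == tuple(signature_sum):
                return set(cand)
    return None  # more than K users collided: tree splitting takes over

# Demo: users 2, 5 and 9 collide; the receiver sees only the sum.
K, sigs = 3, make_signatures(12, 3)
summed = tuple(a + b + c for a, b, c in zip(sigs[2], sigs[5], sigs[9]))
assert resolve_collision(summed, sigs, K) == {2, 5, 9}
```

    When `resolve_collision` returns `None`, more than K users collided and the tree-splitting step partitions the contenders into smaller groups, which is the role of the third element of the framework.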